Jeremy Kedziora, Ph.D.

Associate Professor

  • Milwaukee, WI, United States

Dr. Jeremy Kedziora is the PieperPower Endowed Chair in Artificial Intelligence at MSOE.

Spotlight

Ask an Expert: Is the "AI Moratorium" too far-reaching?

Recent responses to ChatGPT have featured eminent technologists calling for a six-month moratorium on the development of "AI systems more powerful than GPT-4." Dr. Jeremy Kedziora, PieperPower Endowed Chair in Artificial Intelligence at Milwaukee School of Engineering, supports a middle-ground approach between unregulated development and a pause. He says, "I do not agree with a moratorium, but I would call for government action to develop regulatory guidelines for AI use, particularly for endowing AIs with actions."

Dr. Kedziora is available as a subject matter expert on the recent call for an "AI moratorium" issued by tech leaders. According to Dr. Kedziora, there are good reasons to call for additional oversight of AI creation:

  • Large deep learning or reinforcement learning systems encode complicated relationships that are difficult for users to predict and understand. Integrating them into daily use by billions of people implies a complex adaptive system whose behavior is even harder for planners to anticipate, predict, and plan for. This is likely fertile ground for unintended, and bad, outcomes.
  • Rather than outright replacement, a very real possibility is that AI-enabled workers will be productive enough that fewer workers are needed to accomplish the same tasks. The implication is that there won't be enough jobs for those who want them. Governments will therefore need to seriously consider proposals such as universal basic income and work to limit economic displacement, work which will require time and political bargaining.
  • I do not think it is controversial that we would not want a research group at MIT, Caltech, or anywhere else developing an unregulated nuclear weapon. Given the difficulty of predicting its impact, AI may well belong in the same category of powerful technology, suggesting that its creation should be subject to the democratic process.

At the same time, there are some important things to keep in mind about ChatGPT-like AI systems that suggest inherent limits to their impact:

  • Though ChatGPT may appear, at times, to pass the famous Turing test, this does not imply these systems "think," are "self-aware," or are "alive." The Turing test aims to avoid answering those questions altogether by simply asking whether a machine can be distinguished from a human by another human. At the end of the day, ChatGPT is nothing more than a bunch of weights! (A toy sketch at the end of this piece makes that point concrete.)
  • Contemporary AIs, ChatGPT included, have very limited levers to pull. They simply can't take many actions. Indeed, ChatGPT's only action is to create text in response to a prompt. It cannot do anything independently. Its effects, for now, are limited to passing through the hands of humans and to the social changes it could thereby create.
  • The call for a moratorium emphasizes "control" over AI. It is worth asking just what this control means. Take ChatGPT as an example: can its makers control its responses to prompts? Probably only in a limited fashion at best, with less and less ability as more people use it; there simply aren't the resources to police its responses. Can ChatGPT's makers "flip the off switch"? Absolutely: restricting access to the API would effectively turn ChatGPT off. In that sense, it is under the same kind of control that humans subject to a government are.
  • Keep in mind that there are coordination problems: just because there is an AI moratorium in the US does not mean that other countries, particularly US adversaries, will stop development. And as others have said, "as long as AI systems have objectives set by humans, most ethics concerns related to artificial intelligence come from the ethics of the countries wielding them."
  • There are definitional problems with this sort of moratorium. Who would be subject to it? Industry actors? Academics? The criterion used by those calling for the moratorium is "AI systems more powerful than GPT-4," but what does "powerful" mean? Enforcement requires drawing boundaries around which AI development is covered; without those boundaries, how would such a policy be enforced?
  • It might already be too late: some already claim to have recreated ChatGPT.

There are two major groups to think about when developing regulatory solutions for AI: academia and industry. There may already be good vehicles for regulating academic research, for example oversight of grant funding. Oversight of AI development in industry is an area that requires attention and the application of expertise.

If you're a journalist covering artificial intelligence, let us help. Dr. Kedziora is a respected expert in Data Science, Machine Learning, Statistical Modeling, Bayesian Inference, Game Theory, and all things AI. He's available to speak with the media; simply click on the icon to arrange an interview today.
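The "nothing more than a bunch of weights" point above can be made concrete with a toy sketch. The following Python snippet is purely illustrative and assumes nothing about ChatGPT's actual architecture or training; the vocabulary, dimensions, and random weights are all made up. It shows a miniature next-token model whose entire behavior comes from fixed numerical parameters plus arithmetic, and whose only available action is to emit more text.

# Illustrative toy only: NOT ChatGPT's architecture. The point is that
# "generation" is nothing but fixed weight matrices and arithmetic, and
# the only available action is emitting another token of text.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "model", "is", "just", "weights", "."]
V, D = len(VOCAB), 8                      # vocabulary size, hidden dimension

# The model's entire "knowledge": two fixed weight matrices.
W_embed = rng.normal(size=(V, D))         # token id -> vector
W_out = rng.normal(size=(D, V))           # vector -> next-token scores

def next_token_probs(token_ids):
    """Average the prompt embeddings, score the vocabulary, softmax."""
    h = W_embed[token_ids].mean(axis=0)   # crude context vector
    logits = h @ W_out
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

def generate(prompt, n_tokens=3):
    """Greedy decoding: the model's only 'lever' is producing more text."""
    ids = [VOCAB.index(t) for t in prompt]
    for _ in range(n_tokens):
        ids.append(int(np.argmax(next_token_probs(ids))))
    return [VOCAB[i] for i in ids]

print(generate(["the", "model"]))

Swapping in billions of weights and a transformer in place of these two matrices changes the scale of the computation, not its character: the system still just maps a prompt to a probability distribution over the next token.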

Jeremy Kedziora, Ph.D.

Education, Licensure and Certification

Ph.D.

Political Science

University of Rochester

2012

B.S.

Chemistry and Political Science

University of Wisconsin–Madison

2004

Biography

Dr. Jeremy Kedziora is an award-winning researcher and scientist with 17 years of experience developing new methods in machine learning, Bayesian inference, and game theory. Previously, Kedziora was a director of data science and analytics at Northwestern Mutual, where he managed the development of cybersecurity machine learning, and at Giant Oak, where he focused on natural language processing.

Kedziora also served for nine years at the Central Intelligence Agency as a chief methodologist, where he led applied R&D efforts in data science and modeling. He holds a Ph.D. in Political Science from the University of Rochester and teaches at Milwaukee School of Engineering, where he is the PieperPower Endowed Chair in Artificial Intelligence.

Areas of Expertise

Data Science
Machine Learning
Statistical Modeling
Bayesian Inference
Game Theory

Accomplishments

Director's Award, Central Intelligence Agency

2011, 2014, 2017

Cutting Edge Innovation Award, Central Intelligence Agency

2017

Paper of the Year, Central Intelligence Agency

2017

Affiliations

  • American Political Science Association
  • International Studies Association
  • Midwest Political Science Association
  • Peace Science Society
  • Society for Political Methodology

Media Appearances

MSOE names Dr. Jeremy Kedziora as Endowed Chair in Artificial Intelligence

MSOE News  online

2023-03-22

Jeremy Kedziora, Ph.D. has been named the PieperPower Endowed Chair in Artificial Intelligence at Milwaukee School of Engineering.

“Artificial intelligence and machine learning are part of everyday life at home and work. Businesses and industries—from manufacturing to health care and everything in between—are using them to solve problems, improve efficiencies and invent new products,” said Dr. John Walz, MSOE president. “We are excited to welcome Dr. Jeremy Kedziora as MSOE’s first PieperPower Endowed Chair in Artificial Intelligence. With MSOE as an educational leader in this space, it is imperative that our students are prepared to develop and advance AI and machine learning technologies while at the same time implementing them in a responsible and ethical manner.”

The PieperPower Endowed Chair in Artificial Intelligence was made possible through a $2.5 million gift from Pieper Electric Inc. and the PPC Foundation Inc. This new role at MSOE further positions the university at the forefront of artificial intelligence education and next generation technologies.

Kedziora will hold a full-time faculty position in the Electrical Engineering and Computer Science Department at MSOE and will pursue research advancing the interaction of artificial intelligence with humans and its potential impact.

Beyond Google: New Approaches to Indexing the Open and Deep Web

Giant Oak Insights  online

2020-05-15

On Thursday, May 14, Giant Oak and #NatSecGirlSquad presented the webinar "Beyond Google: New Approaches to Indexing the Open and Deep Web" with speaker Dr. Jeremy Kedziora, Giant Oak Director of Science. During the session, Dr. Kedziora discussed the basics of indexing the web as well as the differences between traditional search engines and customizable machine-learning tools such as GOST®. The full recording is available online.

Selected Publications

Systematic review on effects of bioenergy from edible versus inedible feedstocks on food security

npj Science of Food

Ahmed, Selena, Teresa Warne, Erin Smith, Hannah Goemann, Greta Linse, Mark Greenwood, Jeremy Kedziora et al.

2021

Achieving food security is a critical challenge of the Anthropocene that may conflict with environmental and societal goals such as increased energy access. The "fuel versus food" debate coupled with climate mitigation efforts has given rise to next-generation biofuels. Findings of this systematic review indicate just over half of the studies (56% of 224 publications) reported a negative impact of bioenergy production on food security. However, no relationship was found between bioenergy feedstocks that are edible versus inedible and food security (P value = 0.15). A strong relationship was found between bioenergy and type of food security parameter (P value < 0.001), sociodemographic index of study location (P value = 0.001), spatial scale (P value < 0.001), and temporal scale (P value = 0.017). Programs and policies focused on bioenergy and climate mitigation should monitor multiple food security parameters at various scales over the long term toward achieving diverse sustainability goals.

Organizing for violence

University of Rochester

Kedziora, J.T.

2013

This dissertation consists of three essays that analyze the emergence of the state and how the logic of its organization shapes its major pursuit: war. In the first essay, I develop a theory of state formation in which a sovereign must delegate policy implementation to local barons, aware that doing so empowers those barons to act against him. I find that militarily weak sovereigns construct federations by relying on barons whose policy preferences mirror their own while militarily strong sovereigns construct centralized states by relying on barons whose policy preferences are far from their own. My second essay analyzes how rulers formulate war aims given the need to bargain with subjects over wartime resource allocation. I find that the relationship between the effect of military victory on the future course of the war and the effect of military victory on diplomacy emerges as the central factor influencing both subject resolve and ruler war aims. This logic suggests a number of important substantive results, for example, that democracies extract more resources during capital intensive wars while autocracies extract more resources during labor intensive wars, and that material inequality limits war aims. In the final essay I leverage the importance of bargaining between ruler and subject in wartime resource allocation to advance a Bayesian methodology for measuring the uncertainty facing states involved in disputes over the war-fighting capabilities of their opponents. I then analyze the probability of a militarized interstate dispute escalating to war and find that it increases greatly in the presence of uncertainty over capabilities/resolve.
